
SiamMask#

SiamMask tracks and segments an object through a video frame by frame: it is initialized with a single bounding box and outputs a binary segmentation mask and a rotated bounding box for every subsequent frame.

Because SiamMask must be initialized with one bounding box for the object of interest, it is a single-object tracker; multiple object tracking (MOT) is not viable with SiamMask.
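The rotated box is derived from the predicted mask. As a rough illustration of that idea (not SiamMask's actual implementation, which uses OpenCV-style min-area-rect fitting), here is a numpy-only sketch that fits an oriented box to a binary mask via PCA of the foreground pixel coordinates:

```python
import numpy as np

def rotated_box_from_mask(mask: np.ndarray):
    """Fit an oriented bounding box to a binary mask via PCA.

    Illustrative only -- SiamMask itself fits a minimum-area rectangle;
    this sketch just shows how a mask yields an angle and extents.
    Returns (center_xy, (width, height), angle_radians).
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    center = pts.mean(axis=0)
    # Principal axes of the foreground pixel cloud
    cov = np.cov((pts - center).T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(major[1], major[0])
    # Rotate points by -angle so the major axis aligns with x, then measure extents
    rot = np.array([[np.cos(angle), np.sin(angle)],
                    [-np.sin(angle), np.cos(angle)]])
    proj = (pts - center) @ rot.T
    w, h = proj.max(axis=0) - proj.min(axis=0)
    return center, (w, h), angle

# Axis-aligned 10x4 block of foreground pixels as a toy "mask"
m = np.zeros((20, 20), dtype=np.uint8)
m[8:12, 5:15] = 1
center, (w, h), angle = rotated_box_from_mask(m)
```

For this axis-aligned toy mask the recovered angle is (up to sign flips of the eigenvector) zero, and the extents span the foreground pixel centers.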


Aliases#

  • VOT: Visual Object Tracking
  • MOT: Multiple Object Tracking
  • VOS: Video Object Segmentation
  • SOT: Single Object Tracking

Demo#

  • Clone project
  • Setup environment
  • Download model
  • Run
```bash title="clone"
git clone https://github.com/augmentedstartups/SiamMask.git
```

!!! note "requirements.txt"

    The demo is pinned to specific package versions (taken from the repository on GitHub):

    ```text
    Cython==0.29.28
    colorama==0.4.4
    numpy==1.21.5
    requests==2.22.0
    fire==0.4.0
    torch==1.5.1+cu101
    matplotlib==3.5.1
    numba==0.55.1
    scipy==1.7.3
    h5py==3.6.0
    pandas==1.4.0
    tqdm==4.64.0
    tensorboardX==2.5
    torchvision==0.6.1+cu101
    ```

**build from source** (with OpenCV 4.5.4)

```bash title="build"
cd SiamMask
bash make.sh
```
```bash title="setup env"
# Add the SiamMask project root and the experiments/siammask_sharp folder to PYTHONPATH
cd SiamMask
export PYTHONPATH=`pwd`:$PYTHONPATH

cd experiments/siammask_sharp
export PYTHONPATH=`pwd`:$PYTHONPATH
```
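The two `export` lines simply prepend directories to Python's import search path for any interpreter launched afterwards. The effect can be checked from Python itself; a small self-contained sketch using a throwaway directory in place of the SiamMask checkout:

```python
import os
import subprocess
import sys
import tempfile

# Simulate `export PYTHONPATH=<dir>` for a child interpreter and confirm
# that a module placed in <dir> becomes importable, exactly as the
# SiamMask packages do once the repo root is on PYTHONPATH.
with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "probe_module.py"), "w") as f:
        f.write("VALUE = 42\n")
    env = dict(os.environ, PYTHONPATH=d)
    out = subprocess.run(
        [sys.executable, "-c", "import probe_module; print(probe_module.VALUE)"],
        env=env, capture_output=True, text=True, check=True,
    )
    print(out.stdout.strip())  # -> 42
```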
```bash title="download models"
cd SiamMask/experiments/siammask_sharp
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_VOT.pth
wget http://www.robots.ox.ac.uk/~qwang/SiamMask_DAVIS.pth
```
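After downloading, it can be worth sanity-checking the weight files, e.g. by comparing hashes across machines (the repository does not publish checksums, so there is no official value to verify against). A small helper using only the Python standard library:

```python
import hashlib
import tempfile

def sha256_of(path: str, chunk: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so a large .pth file is never fully in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Demo on a throwaway file; locally you would point this at SiamMask_DAVIS.pth
with tempfile.NamedTemporaryFile(suffix=".pth", delete=False) as tf:
    tf.write(b"fake-weights")
    path = tf.name
digest = sha256_of(path)
print(digest)
```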

```bash title="run"
python ../../tools/demo.py --resume SiamMask_DAVIS.pth --config config_davis.json
```
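The demo's control flow is: read the first frame, select a region of interest, initialize the tracker with that box once, then update once per subsequent frame. A self-contained skeleton of that loop with a stub tracker (the real `demo.py` uses the SiamMask network and an OpenCV ROI selector; all names below are illustrative):

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StubTracker:
    """Stand-in for SiamMask: remembers the init box and 'tracks' by shifting it.

    Illustrative only -- it shows the init-once / update-per-frame pattern,
    not any real tracking logic.
    """
    box: Tuple[int, int, int, int] = (0, 0, 0, 0)  # x, y, w, h

    def init(self, frame, box):
        self.box = box

    def update(self, frame):
        x, y, w, h = self.box
        self.box = (x + 1, y, w, h)  # pretend the object drifts right by one pixel
        return self.box

frames: List[str] = ["frame0", "frame1", "frame2"]  # placeholder for decoded video frames
tracker = StubTracker()
tracker.init(frames[0], (10, 20, 30, 40))  # ROI chosen on the first frame
boxes = [tracker.update(f) for f in frames[1:]]
print(boxes)  # -> [(11, 20, 30, 40), (12, 20, 30, 40)]
```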

!!! note "DAVIS 2016"

    DAVIS 2016 is a video object segmentation dataset consisting of 50 videos in total (30 for training and 20 for testing), with per-frame pixel-wise annotations.


Reference#

To check#

  • pysot: SenseTime Research platform for single object tracking, implementing algorithms like SiamRPN and SiamMask